Search in the Catalogues and Directories

Hits 1 – 15 of 15

1. How Universal is Genre in Universal Dependencies? ...
2. Universal Dependencies 2.9
   Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021
3. Universal Dependencies 2.8.1
   Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021
4. Universal Dependencies 2.8
   Zeman, Daniel; Nivre, Joakim; Abrams, Mitchell. Universal Dependencies Consortium, 2021
5. Parsing with Pretrained Language Models, Multiple Datasets, and Dataset Embeddings ...
6. On the Effectiveness of Dataset Embeddings in Mono-lingual, Multi-lingual and Zero-shot Conditions ...
7. Genre as Weak Supervision for Cross-lingual Dependency Parsing ...
8. We Need to Talk About train-dev-test Splits ...
9. Genre as Weak Supervision for Cross-lingual Dependency Parsing ...
10. DaN+: Danish Nested Named Entities and Lexical Normalization ...
11. From Masked Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding ...
12. From Masked-Language Modeling to Translation: Non-English Auxiliary Tasks Improve Zero-shot Spoken Language Understanding ...
    NAACL 2021; van der Goot, Rob. Underline Science Inc., 2021
13. Lexical Normalization for Code-switched Data and its Effect on POS-tagging ...
14. Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor
    In: Computational Linguistics, Vol. 46, Iss. 2, pp. 487–497 (2020)
    Abstract: Analogies such as man is to king as woman is to X are often used to illustrate the amazing power of word embeddings. Concurrently, they have also been used to expose how strongly human biases are encoded in vector spaces trained on natural language, with examples like man is to computer programmer as woman is to homemaker. Recent work has shown that analogies are in fact not an accurate diagnostic for bias, but this does not mean that they are not used anymore, or that their legacy is fading. Instead of focusing on the intrinsic problems of the analogy task as a bias detection tool, we discuss a series of issues involving implementation as well as subjective choices that might have yielded a distorted picture of bias in word embeddings. We stand by the truth that human biases are present in word embeddings, and, of course, the need to address them. But analogies are not an accurate tool to do so, and the way they have been most often used has exacerbated some possibly non-existing biases and perhaps hidden others. Because they are still widely popular, and some of them have become classics within and outside the NLP community, we deem it important to provide a series of clarifications that should put well-known, and potentially new analogies, into the right perspective.
    Keywords: Computational linguistics; Natural language processing; P98-98.5
    URL: https://doi.org/10.1162/coli_a_00379
    https://doaj.org/article/4b710c571bc84a388f0d878371a3b7f2
15. Bleaching Text: Abstract Features for Cross-lingual Gender Prediction ...

© 2013 – 2024 Lin|gu|is|tik